📚 node [[l2_regularization|l2 regularization]]
📓 garden/KGBicheno/Artificial Intelligence/Introduction to AI/Week 3 - Introduction/Definitions/L2_Regularization.md by @KGBicheno
L2 regularization
Go back to the [[AI Glossary]]
A type of regularization that penalizes weights in proportion to the sum of the squares of the weights. L2 regularization helps drive outlier weights (those with large positive or large negative values) closer to 0, but not quite to 0. (Contrast with L1 regularization, which can drive weights exactly to 0.) L2 regularization always improves generalization in linear models.
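The definition above can be sketched in a few lines of NumPy. This is a minimal illustration, not from the original note: `lam` (the regularization strength) and the example weight vector are assumptions chosen to show how the penalty, and its gradient, scale with the square of each weight, so an outlier weight is pulled toward 0 much harder than a small one.

```python
import numpy as np

def l2_penalty(weights, lam):
    # L2 penalty: lam times the sum of squared weights
    return lam * np.sum(weights ** 2)

def l2_gradient(weights, lam):
    # Gradient of the penalty w.r.t. each weight is 2 * lam * w,
    # so larger (outlier) weights receive a proportionally larger
    # pull toward 0 -- but the pull vanishes as w approaches 0,
    # which is why weights get close to 0 without reaching it.
    return 2 * lam * weights

# One outlier weight (5.0) among small ones.
w = np.array([0.5, -0.5, 5.0])
lam = 0.1
penalty = l2_penalty(w, lam)      # 0.1 * (0.25 + 0.25 + 25.0) = 2.55
pull = l2_gradient(w, lam)        # [0.1, -0.1, 1.0]
```

Note how the outlier contributes almost all of the penalty and receives a gradient ten times larger than the small weights, which matches the glossary's point about driving outlier weights toward 0.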